
feat(sdk): AI SDK custom useChat transport & chat.task harness #3173

Draft
ericallam wants to merge 50 commits into main from
feature/tri-7532-ai-sdk-chat-transport-and-chat-task-system

Conversation

@ericallam
Member

No description provided.

cursoragent and others added 27 commits March 4, 2026 13:48
New package that provides a custom AI SDK ChatTransport implementation
bridging Vercel AI SDK's useChat hook with Trigger.dev's durable task
execution and realtime streams.

Key exports:
- TriggerChatTransport class implementing ChatTransport<UIMessage>
- createChatTransport() factory function
- ChatTaskPayload type for task-side typing
- TriggerChatTransportOptions type

The transport triggers a Trigger.dev task with chat messages as payload,
then subscribes to the task's realtime stream to receive UIMessageChunk
data, which useChat processes natively.
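The flow in that last paragraph can be sketched as a self-contained mock. The stub functions (`triggerTask`, `subscribeToStream`) and the chunk shape below are illustrative assumptions, not the SDK's real internals; only the bridging pattern (trigger the task, then surface its realtime stream as a ReadableStream) reflects the description above.

```typescript
// Mock of the transport flow: trigger a task, then surface its
// realtime stream as a ReadableStream of UIMessageChunk values.
type UIMessageChunk = { type: string; delta?: string };

// Stub standing in for the real trigger API call.
async function triggerTask(_taskId: string, _payload: unknown) {
  return { runId: "run_123" };
}

// Stub standing in for the realtime stream subscription.
async function* subscribeToStream(_runId: string): AsyncGenerator<UIMessageChunk> {
  yield { type: "text-delta", delta: "Hello" };
  yield { type: "text-delta", delta: ", world" };
  yield { type: "finish" };
}

// The core of sendMessages(): bridge the subscription into a
// ReadableStream that useChat could consume natively.
async function sendMessages(taskId: string, messages: unknown[]) {
  const run = await triggerTask(taskId, { messages });
  return new ReadableStream<UIMessageChunk>({
    async start(controller) {
      for await (const chunk of subscribeToStream(run.runId)) {
        controller.enqueue(chunk);
      }
      controller.close();
    },
  });
}

// Drain the stream, concatenating text deltas.
async function collectText(taskId: string, messages: unknown[]): Promise<string> {
  const reader = (await sendMessages(taskId, messages)).getReader();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value.type === "text-delta") text += value.delta ?? "";
  }
  return text;
}
```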

Co-authored-by: Eric Allam <eric@trigger.dev>
Tests cover:
- Constructor with required and optional options
- sendMessages triggering task and returning UIMessageChunk stream
- Correct payload structure sent to trigger API
- Custom streamKey in stream URL
- Extra headers propagation
- reconnectToStream with existing and non-existing sessions
- createChatTransport factory function
- Error handling for API failures
- regenerate-message trigger type

Co-authored-by: Eric Allam <eric@trigger.dev>
- Cache ApiClient instance instead of creating per-call
- Add streamTimeoutSeconds option for customizable stream timeout
- Clean up subscribeToStream method (remove unused variable)
- Improve JSDoc with backend task example
- Minor code cleanup

Co-authored-by: Eric Allam <eric@trigger.dev>
Adds 3 additional test cases:
- Abort signal gracefully closes the stream
- Multiple independent chat sessions tracked correctly
- ChatRequestOptions.body is merged into task payload

Co-authored-by: Eric Allam <eric@trigger.dev>
Co-authored-by: Eric Allam <eric@trigger.dev>
ChatSessionState is an implementation detail of the transport's
session tracking. Users don't need to access it since the sessions
map is private.

Co-authored-by: Eric Allam <eric@trigger.dev>
The accessToken option now accepts either a string or a function
returning a string. This enables dynamic token refresh patterns:

  new TriggerChatTransport({
    taskId: 'my-task',
    accessToken: () => getLatestToken(),
  })

The function is called on each sendMessages() call, allowing fresh
tokens to be used for each task trigger.
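A minimal sketch of that resolution step, assuming a helper named resolveAccessToken (the name appears in a later commit message, but the body here is a guess):

```typescript
// Accept either a static token or a function producing one
// (sync or async), as described above.
type AccessTokenOption = string | (() => string | Promise<string>);

// Called on each sendMessages() invocation so function-based
// options can return a freshly refreshed token every time.
async function resolveAccessToken(option: AccessTokenOption): Promise<string> {
  return typeof option === "function" ? await option() : option;
}
```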

Co-authored-by: Eric Allam <eric@trigger.dev>
Use the already-resolved token when creating ApiClient instead of
calling resolveAccessToken() again through getApiClient().

Co-authored-by: Eric Allam <eric@trigger.dev>
Two new subpath exports:

@trigger.dev/sdk/chat (frontend, browser-safe):
- TriggerChatTransport — ChatTransport implementation for useChat
- createChatTransport() — factory function
- TriggerChatTransportOptions type

@trigger.dev/sdk/ai (backend, adds to existing ai.tool/ai.currentToolOptions):
- chatTask() — pre-typed task wrapper with auto-pipe
- pipeChat() — pipe StreamTextResult to realtime stream
- CHAT_STREAM_KEY constant
- ChatTaskPayload type
- ChatTaskOptions type
- PipeChatOptions type

Co-authored-by: Eric Allam <eric@trigger.dev>
Move and adapt tests from packages/ai to packages/trigger-sdk.
- Import from ./chat.js instead of ./transport.js
- Use 'task' option instead of 'taskId'
- All 17 tests passing

Co-authored-by: Eric Allam <eric@trigger.dev>
All functionality now lives in:
- @trigger.dev/sdk/chat (frontend transport)
- @trigger.dev/sdk/ai (backend chatTask, pipeChat)

Co-authored-by: Eric Allam <eric@trigger.dev>
Co-authored-by: Eric Allam <eric@trigger.dev>
1. Add null/object guard before enqueuing UIMessageChunk from SSE stream
   to handle heartbeat or malformed events safely
2. Use incrementing counter instead of Date.now() in test message
   factories to avoid duplicate IDs
3. Add test covering publicAccessToken from trigger response being used
   for stream subscription auth
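The guard from item 1 can be sketched as a small type predicate; the name isEnqueueableChunk is hypothetical:

```typescript
// Only enqueue values that are non-null plain objects, silently
// skipping heartbeats or malformed SSE payloads.
function isEnqueueableChunk(value: unknown): value is Record<string, unknown> {
  return typeof value === "object" && value !== null && !Array.isArray(value);
}
```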

Co-authored-by: Eric Allam <eric@trigger.dev>
Comprehensive guide covering:
- Quick start with chatTask + TriggerChatTransport
- Backend patterns: simple (return streamText), complex (pipeChat),
  and manual (task + ChatTaskPayload)
- Frontend options: dynamic tokens, extra data, self-hosting
- ChatTaskPayload reference
- Added to Writing tasks navigation near Streams

Co-authored-by: Eric Allam <eric@trigger.dev>
Minimal example showcasing the new chatTask + TriggerChatTransport APIs:
- Backend: chatTask with streamText auto-pipe (src/trigger/chat.ts)
- Frontend: TriggerChatTransport with useChat (src/components/chat.tsx)
- Token generation via auth.createTriggerPublicToken (src/app/page.tsx)
- Tailwind v4 styling

Co-authored-by: Eric Allam <eric@trigger.dev>
…delMessages

@ai-sdk/openai v3 and @ai-sdk/react v3 are needed for ai v6 compatibility.
convertToModelMessages is async in newer AI SDK versions.

Co-authored-by: Eric Allam <eric@trigger.dev>
@changeset-bot

changeset-bot bot commented Mar 4, 2026

🦋 Changeset detected

Latest commit: 0415ccb

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 29 packages
Name Type
@trigger.dev/sdk Minor
@trigger.dev/python Minor
@internal/sdk-compat-tests Patch
references-ai-chat Patch
d3-chat Patch
references-d3-openai-agents Patch
references-nextjs-realtime Patch
references-realtime-hooks-test Patch
references-realtime-streams Patch
references-telemetry Patch
@trigger.dev/build Minor
@trigger.dev/core Minor
@trigger.dev/react-hooks Minor
@trigger.dev/redis-worker Minor
@trigger.dev/rsc Minor
@trigger.dev/schema-to-json Minor
@trigger.dev/database Minor
@trigger.dev/otlp-importer Minor
trigger.dev Minor
@internal/cache Patch
@internal/clickhouse Patch
@internal/redis Patch
@internal/replication Patch
@internal/run-engine Patch
@internal/schedule-engine Patch
@internal/testcontainers Patch
@internal/tracing Patch
@internal/tsql Patch
@internal/zod-worker Patch


@coderabbitai
Contributor

coderabbitai bot commented Mar 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

Adds a browser-safe chat transport and factory (TriggerChatTransport, createChatTransport) and a React hook (useTriggerChatTransport) under @trigger.dev/sdk/chat. Extends the backend AI SDK (@trigger.dev/sdk/ai) with chat primitives (chatTask, pipeChat, createChatAccessToken, CHAT_STREAM_KEY), many chat-related types, and runtime helpers. Implements per-item oversized NDJSON handling (OversizedItemMarker, extractIndexAndTask) and removes BatchItemTooLargeError/related size checks. Adds InputStreamManager methods (setLastSeqNum, shiftBuffer, disconnectStream) and introduces StreamWriteResult and new realtime options (spanName, collapsed). Updates package exports, docs, tests, and package-installation guidance.
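The per-item oversized NDJSON handling mentioned in the walkthrough could look roughly like this sketch. The marker shape and the size limit are assumptions, with OversizedItemMarker borrowed from the name in the summary:

```typescript
// Illustrative size limit for a single NDJSON item (assumed value).
const MAX_ITEM_BYTES = 1_000_000;

type OversizedItemMarker = { __oversized: true; index: number; bytes: number };

// Encode items as NDJSON; instead of failing the whole batch, replace
// any single item over the limit with a marker recording its index.
function encodeNdjsonItems(items: unknown[]): string {
  const lines = items.map((item, index) => {
    const line = JSON.stringify(item);
    const bytes = new TextEncoder().encode(line).length;
    if (bytes > MAX_ITEM_BYTES) {
      const marker: OversizedItemMarker = { __oversized: true, index, bytes };
      return JSON.stringify(marker);
    }
    return line;
  });
  return lines.join("\n");
}
```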

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~150 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check (⚠️ Warning): The PR description is entirely missing. The author provided no description content, violating the template requirement for testing details, changelog, and confirmation of following the contributing guidelines. Resolution: add a detailed PR description including testing steps, a changelog summary, and confirmation that contributing guidelines were followed, per the provided template.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 68.18%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (1 passed)

  • Title check (✅ Passed): The PR title clearly summarizes the main change (introducing an AI SDK custom useChat transport and chat.task harness), which aligns with the extensive additions across chat transport, backend task handling, and React integration.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feature/tri-7532-ai-sdk-chat-transport-and-chat-task-system


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/trigger-sdk/src/v3/streams.ts (1)

780-807: ⚠️ Potential issue | 🔴 Critical

Potential input event loss when advancing sequence after disconnect.

disconnectStream() aborts tailing asynchronously, so new tail events can still advance seq before shutdown. Then setLastSeqNum(prev + 1) at Line 806 can skip unseen events on reconnect.

🛠️ Safer local mitigation
-              inputStreams.disconnectStream(opts.id);
+              const seqBeforeDisconnect = inputStreams.lastSeqNum(opts.id);
+              inputStreams.disconnectStream(opts.id);

               // 3. Suspend the task
               const waitResult = await runtime.waitUntil(response.waitpointId);

@@
-                const prevSeq = inputStreams.lastSeqNum(opts.id);
-                inputStreams.setLastSeqNum(opts.id, (prevSeq ?? -1) + 1);
+                const seqAfterWait = inputStreams.lastSeqNum(opts.id);
+                if (seqAfterWait === seqBeforeDisconnect) {
+                  inputStreams.setLastSeqNum(opts.id, (seqBeforeDisconnect ?? -1) + 1);
+                }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/trigger-sdk/src/v3/streams.ts` around lines 780 - 807,
disconnectStream aborts tailing asynchronously so advancing seq using the stale
prevSeq can skip events that arrived during shutdown; modify the logic around
inputStreams.disconnectStream(opts.id) + inputStreams.setLastSeqNum(...) to
first await/ensure the stream is fully stopped (e.g., await the disconnect
promise or wait for a closed/aborted signal for opts.id), then read the latest
sequence via inputStreams.lastSeqNum(opts.id) and setLastSeqNum to (latestSeq ??
-1) + 1 (or otherwise max(prev+1, latest+1)) so you never advance past events
that arrived during the asynchronous disconnect; reference disconnectStream,
waitUntil, inputStreams.lastSeqNum, inputStreams.setLastSeqNum and opts.id when
locating the change.
🧹 Nitpick comments (3)
packages/trigger-sdk/src/v3/ai.ts (1)

718-731: Consider logging a warning when token creation fails.

If token creation fails, turnAccessToken remains an empty string and is passed to lifecycle hooks and the turn-complete chunk. The frontend may fail to reconnect. Consider logging a warning to aid debugging:

               } catch {
-                // Token creation failed
+                console.warn(`[chatTask] Failed to create access token for run ${currentRunId}`);
               }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/trigger-sdk/src/v3/ai.ts` around lines 718 - 731, The token creation
catch block swallows errors, leaving turnAccessToken empty and making frontend
reconnection debugging hard; update the catch in the currentRunId branch around
auth.createPublicToken (using chatAccessTokenTTL) to log a warning via the
existing logger (or processLogger) that includes the caught error and context
(e.g., currentRunId and that turnAccessToken will be empty) so lifecycle hooks
and the turn-complete chunk consumers can be diagnosed easily.
docs/guides/ai-chat.mdx (1)

218-231: Manual mode example missing message accumulation caveat.

The manual mode example uses a regular task() with ChatTaskPayload, but this won't get automatic message accumulation across turns. The warning at lines 233-235 mentions this, but the example itself doesn't show how to handle multi-turn conversations manually.

Consider adding a note that manual mode is single-turn only, or show how to implement accumulation manually:

 export const manualChat = task({
   id: "manual-chat",
   retry: { maxAttempts: 3 },
   queue: { concurrencyLimit: 10 },
-  run: async (payload: ChatTaskPayload) => {
+  // Note: This is single-turn only. For multi-turn, use chat.task()
+  run: async (payload: ChatTaskPayload) => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/ai-chat.mdx` around lines 218 - 231, The manualChat task example
uses task() with ChatTaskPayload and calls streamText(...) then await
chat.pipe(result) but doesn't show that this is single-turn only; update the
example or add a short note: either state that manual mode (manualChat) does not
auto-accumulate messages across turns, or modify the task to explicitly
accumulate prior turns by merging stored conversation history with
payload.messages before calling streamText (i.e., gather previousMessages +
payload.messages into the messages argument), then pipe the result as shown;
reference manualChat, ChatTaskPayload, streamText, and chat.pipe so readers can
locate and implement the accumulation if they want multi-turn behavior.
packages/core/src/v3/realtimeStreams/types.ts (1)

33-40: Prefer type aliases for the updated stream contracts.

Since these signatures were touched, please migrate these interfaces to type aliases to match repo conventions.

As per coding guidelines: **/*.{ts,tsx}: Use types over interfaces for TypeScript.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/v3/realtimeStreams/types.ts` around lines 33 - 40, Replace
the two exported interfaces with exported type aliases: change
RealtimeStreamInstance and StreamsWriter from interface declarations to type
aliases while preserving their shapes and member names (ensure
RealtimeStreamInstance still exposes wait(): Promise<StreamWriteResult> and a
getter stream: AsyncIterableStream<T>, and StreamsWriter still exposes wait():
Promise<StreamWriteResult>); keep the same generic parameter T and export names
so public API is unchanged and update any references if needed to the new type
aliases.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/guides/ai-chat.mdx`:
- Around line 804-817: The docs table for ChatTaskOptions is missing the
chatAccessTokenTTL entry; update the ChatTaskOptions table in
docs/guides/ai-chat.mdx to include a row for `chatAccessTokenTTL` (property on
the ChatTaskOptions type) with Type `string`, Default `"1h"`, and a short
Description like "TTL for generated chat access tokens" so it matches the
`chatAccessTokenTTL` field defined in the ChatTaskOptions type in ai.ts.

---

Outside diff comments:
In `@packages/trigger-sdk/src/v3/streams.ts`:
- Around line 780-807: disconnectStream aborts tailing asynchronously so
advancing seq using the stale prevSeq can skip events that arrived during
shutdown; modify the logic around inputStreams.disconnectStream(opts.id) +
inputStreams.setLastSeqNum(...) to first await/ensure the stream is fully
stopped (e.g., await the disconnect promise or wait for a closed/aborted signal
for opts.id), then read the latest sequence via inputStreams.lastSeqNum(opts.id)
and setLastSeqNum to (latestSeq ?? -1) + 1 (or otherwise max(prev+1, latest+1))
so you never advance past events that arrived during the asynchronous
disconnect; reference disconnectStream, waitUntil, inputStreams.lastSeqNum,
inputStreams.setLastSeqNum and opts.id when locating the change.

---

Nitpick comments:
In `@docs/guides/ai-chat.mdx`:
- Around line 218-231: The manualChat task example uses task() with
ChatTaskPayload and calls streamText(...) then await chat.pipe(result) but
doesn't show that this is single-turn only; update the example or add a short
note: either state that manual mode (manualChat) does not auto-accumulate
messages across turns, or modify the task to explicitly accumulate prior turns
by merging stored conversation history with payload.messages before calling
streamText (i.e., gather previousMessages + payload.messages into the messages
argument), then pipe the result as shown; reference manualChat, ChatTaskPayload,
streamText, and chat.pipe so readers can locate and implement the accumulation
if they want multi-turn behavior.

In `@packages/core/src/v3/realtimeStreams/types.ts`:
- Around line 33-40: Replace the two exported interfaces with exported type
aliases: change RealtimeStreamInstance and StreamsWriter from interface
declarations to type aliases while preserving their shapes and member names
(ensure RealtimeStreamInstance still exposes wait(): Promise<StreamWriteResult>
and a getter stream: AsyncIterableStream<T>, and StreamsWriter still exposes
wait(): Promise<StreamWriteResult>); keep the same generic parameter T and
export names so public API is unchanged and update any references if needed to
the new type aliases.

In `@packages/trigger-sdk/src/v3/ai.ts`:
- Around line 718-731: The token creation catch block swallows errors, leaving
turnAccessToken empty and making frontend reconnection debugging hard; update
the catch in the currentRunId branch around auth.createPublicToken (using
chatAccessTokenTTL) to log a warning via the existing logger (or processLogger)
that includes the caught error and context (e.g., currentRunId and that
turnAccessToken will be empty) so lifecycle hooks and the turn-complete chunk
consumers can be diagnosed easily.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: ebda7b1b-2998-4f72-a46e-4207104ed513

📥 Commits

Reviewing files that changed from the base of the PR and between 5ee5959 and cfe56ab.

⛔ Files ignored due to path filters (4)
  • references/ai-chat/src/app/actions.ts is excluded by !references/**
  • references/ai-chat/src/components/chat-app.tsx is excluded by !references/**
  • references/ai-chat/src/components/chat.tsx is excluded by !references/**
  • references/ai-chat/src/trigger/chat.ts is excluded by !references/**
📒 Files selected for processing (10)
  • docs/guides/ai-chat.mdx
  • packages/core/src/v3/realtimeStreams/manager.ts
  • packages/core/src/v3/realtimeStreams/noopManager.ts
  • packages/core/src/v3/realtimeStreams/streamInstance.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV1.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV2.ts
  • packages/core/src/v3/realtimeStreams/types.ts
  • packages/trigger-sdk/src/v3/ai.ts
  • packages/trigger-sdk/src/v3/chat.ts
  • packages/trigger-sdk/src/v3/streams.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (29)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (3, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (6, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (8, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (2, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (4, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (1, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (5, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (7, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (2, 8)
  • GitHub Check: sdk-compat / Node.js 22.12 (ubuntu-latest)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (4, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (5, 8)
  • GitHub Check: units / internal / 🧪 Unit Tests: Internal (8, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (1, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (7, 8)
  • GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - npm)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (6, 8)
  • GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (3, 8)
  • GitHub Check: units / packages / 🧪 Unit Tests: Packages (1, 1)
  • GitHub Check: sdk-compat / Cloudflare Workers
  • GitHub Check: sdk-compat / Deno Runtime
  • GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - pnpm)
  • GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - pnpm)
  • GitHub Check: typecheck / typecheck
  • GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - npm)
  • GitHub Check: sdk-compat / Node.js 20.20 (ubuntu-latest)
  • GitHub Check: sdk-compat / Bun Runtime
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (python)
🧰 Additional context used
📓 Path-based instructions (10)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead

**/*.{ts,tsx}: In TypeScript SDK usage, always import from @trigger.dev/sdk, never from @trigger.dev/sdk/v3 or use deprecated client.defineJob
Import from @trigger.dev/core subpaths only, never from the root
Use the Run Engine 2.0 (@internal/run-engine) and redis-worker for all new work, not legacy V1 MarQS queue or deprecated V1 functions

Files:

  • packages/core/src/v3/realtimeStreams/manager.ts
  • packages/trigger-sdk/src/v3/chat.ts
  • packages/core/src/v3/realtimeStreams/streamInstance.ts
  • packages/core/src/v3/realtimeStreams/noopManager.ts
  • packages/core/src/v3/realtimeStreams/types.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV2.ts
  • packages/trigger-sdk/src/v3/streams.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV1.ts
  • packages/trigger-sdk/src/v3/ai.ts
{packages/core,apps/webapp}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use zod for validation in packages/core and apps/webapp

Files:

  • packages/core/src/v3/realtimeStreams/manager.ts
  • packages/core/src/v3/realtimeStreams/streamInstance.ts
  • packages/core/src/v3/realtimeStreams/noopManager.ts
  • packages/core/src/v3/realtimeStreams/types.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV2.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV1.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use function declarations instead of default exports

Files:

  • packages/core/src/v3/realtimeStreams/manager.ts
  • packages/trigger-sdk/src/v3/chat.ts
  • packages/core/src/v3/realtimeStreams/streamInstance.ts
  • packages/core/src/v3/realtimeStreams/noopManager.ts
  • packages/core/src/v3/realtimeStreams/types.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV2.ts
  • packages/trigger-sdk/src/v3/streams.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV1.ts
  • packages/trigger-sdk/src/v3/ai.ts
**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/otel-metrics.mdc)

**/*.ts: When creating or editing OTEL metrics (counters, histograms, gauges), ensure metric attributes have low cardinality by using only enums, booleans, bounded error codes, or bounded shard IDs
Do not use high-cardinality attributes in OTEL metrics such as UUIDs/IDs (envId, userId, runId, projectId, organizationId), unbounded integers (itemCount, batchSize, retryCount), timestamps (createdAt, startTime), or free-form strings (errorMessage, taskName, queueName)
When exporting OTEL metrics via OTLP to Prometheus, be aware that the exporter automatically adds unit suffixes to metric names (e.g., 'my_duration_ms' becomes 'my_duration_ms_milliseconds', 'my_counter' becomes 'my_counter_total'). Account for these transformations when writing Grafana dashboards or Prometheus queries

Files:

  • packages/core/src/v3/realtimeStreams/manager.ts
  • packages/trigger-sdk/src/v3/chat.ts
  • packages/core/src/v3/realtimeStreams/streamInstance.ts
  • packages/core/src/v3/realtimeStreams/noopManager.ts
  • packages/core/src/v3/realtimeStreams/types.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV2.ts
  • packages/trigger-sdk/src/v3/streams.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV1.ts
  • packages/trigger-sdk/src/v3/ai.ts
**/*.{js,ts,jsx,tsx,json,md,yaml,yml}

📄 CodeRabbit inference engine (AGENTS.md)

Format code using Prettier before committing

Files:

  • packages/core/src/v3/realtimeStreams/manager.ts
  • packages/trigger-sdk/src/v3/chat.ts
  • packages/core/src/v3/realtimeStreams/streamInstance.ts
  • packages/core/src/v3/realtimeStreams/noopManager.ts
  • packages/core/src/v3/realtimeStreams/types.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV2.ts
  • packages/trigger-sdk/src/v3/streams.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV1.ts
  • packages/trigger-sdk/src/v3/ai.ts
packages/core/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (packages/core/CLAUDE.md)

Never import the root package (@trigger.dev/core). Always use subpath imports such as @trigger.dev/core/v3, @trigger.dev/core/v3/utils, @trigger.dev/core/logger, or @trigger.dev/core/schemas

Files:

  • packages/core/src/v3/realtimeStreams/manager.ts
  • packages/core/src/v3/realtimeStreams/streamInstance.ts
  • packages/core/src/v3/realtimeStreams/noopManager.ts
  • packages/core/src/v3/realtimeStreams/types.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV2.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV1.ts
docs/**/*.{md,mdx}

📄 CodeRabbit inference engine (CLAUDE.md)

Docs in docs/ directory should use Mintlify MDX format following conventions in docs/CLAUDE.md

Files:

  • docs/guides/ai-chat.mdx
docs/**/*.mdx

📄 CodeRabbit inference engine (docs/CLAUDE.md)

docs/**/*.mdx: MDX documentation pages must include frontmatter with title (required), description (required), and sidebarTitle (optional) in YAML format
Use Mintlify components for structured content
Always import from @trigger.dev/sdk in code examples (never from @trigger.dev/sdk/v3)
Code examples must be complete and runnable where possible
Use language tags in code fences: typescript, bash, json

Files:

  • docs/guides/ai-chat.mdx
packages/trigger-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

In the Trigger.dev SDK (packages/trigger-sdk), prefer isomorphic code like fetch and ReadableStream instead of Node.js-specific code

Files:

  • packages/trigger-sdk/src/v3/chat.ts
  • packages/trigger-sdk/src/v3/streams.ts
  • packages/trigger-sdk/src/v3/ai.ts
packages/trigger-sdk/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (packages/trigger-sdk/CLAUDE.md)

Always import from @trigger.dev/sdk. Never use @trigger.dev/sdk/v3 (deprecated path alias)

Files:

  • packages/trigger-sdk/src/v3/chat.ts
  • packages/trigger-sdk/src/v3/streams.ts
  • packages/trigger-sdk/src/v3/ai.ts
🧠 Learnings (19)
📓 Common learnings
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: The SDK at packages/trigger-sdk is an isomorphic TypeScript SDK
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to packages/trigger-sdk/**/*.{ts,tsx} : In the Trigger.dev SDK (packages/trigger-sdk), prefer isomorphic code like fetch and ReadableStream instead of Node.js-specific code
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use `@trigger.dev/sdk/v3` for all imports in Trigger.dev tasks
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use `.withStreams()` to subscribe to realtime streams from task metadata in addition to run changes
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: packages/trigger-sdk/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:48.124Z
Learning: Applies to packages/trigger-sdk/**/*.{ts,tsx,js,jsx} : Always import from `@trigger.dev/sdk`. Never use `@trigger.dev/sdk/v3` (deprecated path alias)
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Use the `@trigger.dev/react-hooks` package for realtime subscriptions in React components
📚 Learning: 2025-11-27T16:27:35.304Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use `.withStreams()` to subscribe to realtime streams from task metadata in addition to run changes

Applied to files:

  • packages/core/src/v3/realtimeStreams/manager.ts
  • docs/guides/ai-chat.mdx
  • packages/trigger-sdk/src/v3/chat.ts
  • packages/core/src/v3/realtimeStreams/types.ts
  • packages/core/src/v3/realtimeStreams/streamsWriterV2.ts
  • packages/trigger-sdk/src/v3/streams.ts
  • packages/trigger-sdk/src/v3/ai.ts
📚 Learning: 2026-03-02T12:43:34.140Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: packages/cli-v3/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:34.140Z
Learning: Keep SDK documentation in `rules/` and `.claude/skills/trigger-dev-tasks/` synchronized when features are added or changed

Applied to files:

  • docs/guides/ai-chat.mdx
📚 Learning: 2026-03-02T12:43:37.906Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: packages/core/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:37.906Z
Learning: Exercise caution with changes to `@trigger.dev/core` as they affect both the customer-facing SDK and server-side webapp - breaking changes can impact deployed user tasks and the platform simultaneously

Applied to files:

  • docs/guides/ai-chat.mdx
📚 Learning: 2026-03-02T12:43:48.124Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: packages/trigger-sdk/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:48.124Z
Learning: Applies to packages/trigger-sdk/**/*.{ts,tsx,js,jsx} : Always import from `@trigger.dev/sdk`. Never use `@trigger.dev/sdk/v3` (deprecated path alias)

Applied to files:

  • docs/guides/ai-chat.mdx
  • packages/trigger-sdk/src/v3/chat.ts
  • packages/trigger-sdk/src/v3/ai.ts
📚 Learning: 2026-03-02T12:43:02.539Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: docs/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:02.539Z
Learning: Applies to docs/**/*.mdx : Always import from `@trigger.dev/sdk` in code examples (never from `@trigger.dev/sdk/v3`)

Applied to files:

  • docs/guides/ai-chat.mdx
📚 Learning: 2025-11-27T16:27:35.304Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use the `task()` function from `trigger.dev/sdk/v3` to define tasks with id and run properties

Applied to files:

  • docs/guides/ai-chat.mdx
  • packages/trigger-sdk/src/v3/ai.ts
📚 Learning: 2025-11-27T16:27:35.304Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use `yourTask.trigger()` to trigger a task from inside another task with specified payload

Applied to files:

  • docs/guides/ai-chat.mdx
📚 Learning: 2025-11-27T16:27:35.304Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use `trigger.dev/sdk/v3` for all imports in Trigger.dev tasks

Applied to files:

  • packages/trigger-sdk/src/v3/chat.ts
  • packages/trigger-sdk/src/v3/ai.ts
📚 Learning: 2025-11-27T16:26:37.432Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: The SDK at packages/trigger-sdk is an isomorphic TypeScript SDK

Applied to files:

  • packages/trigger-sdk/src/v3/chat.ts
📚 Learning: 2025-11-27T16:26:37.432Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to packages/trigger-sdk/**/*.{ts,tsx} : In the Trigger.dev SDK (packages/trigger-sdk), prefer isomorphic code like fetch and ReadableStream instead of Node.js-specific code

Applied to files:

  • packages/trigger-sdk/src/v3/chat.ts
  • packages/trigger-sdk/src/v3/streams.ts
📚 Learning: 2025-11-27T16:27:35.304Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use `schemaTask()` from `trigger.dev/sdk/v3` with Zod schema for payload validation

Applied to files:

  • packages/trigger-sdk/src/v3/ai.ts
📚 Learning: 2026-03-02T12:42:41.110Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-03-02T12:42:41.110Z
Learning: Applies to **/*.{ts,tsx} : In TypeScript SDK usage, always import from trigger.dev/sdk, never from trigger.dev/sdk/v3 or use deprecated client.defineJob

Applied to files:

  • packages/trigger-sdk/src/v3/ai.ts
📚 Learning: 2026-03-03T13:08:03.862Z
Learnt from: ericallam
Repo: triggerdotdev/trigger.dev PR: 3166
File: packages/redis-worker/src/fair-queue/index.ts:1114-1121
Timestamp: 2026-03-03T13:08:03.862Z
Learning: In packages/redis-worker/src/fair-queue/index.ts, it's acceptable for the worker queue depth cap check to allow overshooting by up to batchClaimSize messages per iteration, as the next iteration will recheck and prevent sustained growth beyond the limit.

Applied to files:

  • packages/trigger-sdk/src/v3/ai.ts
📚 Learning: 2025-11-27T16:27:35.304Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Attach metadata to task runs using the metadata option when triggering, and access/update it inside runs using metadata functions

Applied to files:

  • packages/trigger-sdk/src/v3/ai.ts
📚 Learning: 2025-11-27T16:27:35.304Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use metadata methods (set, del, replace, append, remove, increment, decrement, stream, flush) to update metadata during task execution

Applied to files:

  • packages/trigger-sdk/src/v3/ai.ts
📚 Learning: 2025-11-27T16:27:35.304Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Export tasks with unique IDs within the project to enable proper task discovery and execution

Applied to files:

  • packages/trigger-sdk/src/v3/ai.ts
📚 Learning: 2025-11-27T16:27:35.304Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use logger methods (debug, log, info, warn, error) from `trigger.dev/sdk/v3` for structured logging in tasks

Applied to files:

  • packages/trigger-sdk/src/v3/ai.ts
📚 Learning: 2025-11-27T16:27:35.304Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/writing-tasks.mdc:0-0
Timestamp: 2025-11-27T16:27:35.304Z
Learning: Applies to **/trigger/**/*.{ts,tsx,js,jsx} : Use `tasks.trigger()` with type-only imports to trigger tasks from backend code without importing the task implementation

Applied to files:

  • packages/trigger-sdk/src/v3/ai.ts
🔇 Additional comments (19)
packages/trigger-sdk/src/v3/chat.ts (4)

278-279: Pass headers option to sendInputStream API call.

The extraHeaders stored on the transport are used for SSE subscriptions (Line 415) but not passed to the sendInputStream call here. This breaks proxy/tenant routing or custom auth flows that depend on these headers.

Proposed fix
         const apiClient = new ApiClient(this.baseURL, session.publicAccessToken);
-        await apiClient.sendInputStream(session.runId, CHAT_MESSAGES_STREAM_ID, minimalPayload);
+        await apiClient.sendInputStream(session.runId, CHAT_MESSAGES_STREAM_ID, minimalPayload, {
+          headers: this.extraHeaders,
+        });

297-302: Pass headers option to triggerTask API call.

Same issue as sendInputStream — the extraHeaders are not forwarded to the triggerTask call, breaking custom auth or proxy routing.

Proposed fix
-    const triggerResponse = await apiClient.triggerTask(this.taskId, {
-      payload,
-      options: {
-        payloadType: "application/json",
-      },
-    });
+    const triggerResponse = await apiClient.triggerTask(
+      this.taskId,
+      {
+        payload,
+        options: {
+          payloadType: "application/json",
+        },
+      },
+      undefined,
+      { headers: this.extraHeaders }
+    );

438-441: Pass headers option to stop signal sendInputStream call.

The stop signal path also omits extraHeaders.

Proposed fix
             const api = new ApiClient(this.baseURL, session.publicAccessToken);
             api
-              .sendInputStream(session.runId, CHAT_STOP_STREAM_ID, { stop: true })
+              .sendInputStream(session.runId, CHAT_STOP_STREAM_ID, { stop: true }, {
+                headers: this.extraHeaders,
+              })
               .catch(() => {}); // Best-effort

459-550: LGTM — Stream subscription and chunk processing logic.

The SSE stream subscription correctly:

  • Handles __trigger_turn_complete control chunks and token refresh
  • Tracks lastEventId for stream resumption
  • Manages abort signals and graceful shutdown
  • Skips leftover chunks after stop via skipToTurnComplete
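The behaviors listed above can be sketched roughly as follows. This is a simplified illustration, not the SDK implementation; the `StreamEvent` shape, the `processEvents` helper, and the `skipToTurnComplete` flag placement are assumptions based on this review's description:

```typescript
// Illustrative chunk-processing loop: remember lastEventId for resumption,
// stop at the turn-complete control chunk, and drop leftovers after a stop.
type StreamEvent = { id: string; chunk: { type: string } };

function processEvents(
  events: StreamEvent[],
  opts: { skipToTurnComplete?: boolean } = {}
): { delivered: { type: string }[]; lastEventId?: string } {
  const delivered: { type: string }[] = [];
  let lastEventId: string | undefined;
  for (const ev of events) {
    lastEventId = ev.id; // remembered so a reconnect can resume from here
    if (ev.chunk.type === "__trigger_turn_complete") break; // control chunk ends the turn
    if (opts.skipToTurnComplete) continue; // after a stop, discard leftover chunks
    delivered.push(ev.chunk);
  }
  return { delivered, lastEventId };
}
```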
packages/trigger-sdk/src/v3/ai.ts (6)

247-248: Global _chatPipeCount is race-prone with concurrent chat runs.

_chatPipeCount is module-global, so concurrent chat runs can mutate the same counter and incorrectly suppress or trigger auto-piping in other runs.

Suggested fix — scope counter per run
-let _chatPipeCount = 0;
+const _chatPipeCountByRun = new Map<string, number>();

 async function pipeChat(
   source: UIMessageStreamable | AsyncIterable<unknown> | ReadableStream<unknown>,
   options?: PipeChatOptions
 ): Promise<void> {
-  _chatPipeCount++;
+  const runId = taskContext.ctx?.run.id;
+  if (runId) {
+    _chatPipeCountByRun.set(runId, (_chatPipeCountByRun.get(runId) ?? 0) + 1);
+  }
   // ...
 }

Then in the chatTask turn loop, reset and check the run-scoped counter instead of the global.


453-552: chatTask does not support custom streamKey end-to-end.

TriggerChatTransport allows configuring a custom streamKey, but chatTask hardcodes CHAT_STREAM_KEY for auto-piping (Line 801) and the turn-complete control chunk (Line 1047). If a user configures a non-default stream key on the frontend, the turn completion will hang because the frontend subscribes to a different stream than where the backend writes.

Suggested fix — add `streamKey` option to `chatTask`

Add a streamKey option to ChatTaskOptions and thread it through to pipeChat and writeTurnCompleteChunk:

 export type ChatTaskOptions<TIdentifier extends string> = Omit<...> & {
+  /** Stream key for output. Must match TriggerChatTransport.streamKey. `@default` "chat" */
+  streamKey?: string;
   run: (payload: ChatTaskRunPayload) => Promise<unknown>;
   // ...
 };

 function chatTask<TIdentifier extends string>(...) {
   const {
     run: userRun,
+    streamKey = CHAT_STREAM_KEY,
     // ...
   } = options;

   // In auto-pipe:
-  await pipeChat(uiStream, { signal: combinedSignal, spanName: "stream response" });
+  await pipeChat(uiStream, { streamKey, signal: combinedSignal, spanName: "stream response" });

   // In turn complete:
-  const turnCompleteResult = await writeTurnCompleteChunk(...);
+  const turnCompleteResult = await writeTurnCompleteChunk(streamKey, ...);
 }

-async function writeTurnCompleteChunk(chatId?: string, publicAccessToken?: string) {
-  const { waitUntilComplete } = streams.writer(CHAT_STREAM_KEY, {
+async function writeTurnCompleteChunk(streamKey: string = CHAT_STREAM_KEY, chatId?: string, publicAccessToken?: string) {
+  const { waitUntilComplete } = streams.writer(streamKey, {

880-884: Verify handling of multiple buffered messages.

When messages arrive during streaming, they're buffered in pendingMessages. However, only the first message is used (Line 882) — any additional messages are discarded. If multiple messages arrive during a streaming turn, subsequent ones will be lost.

Consider preserving the backlog:

+      const queuedMessages: ChatTaskWirePayload[] = [];
+
       // ... inside turn loop after processing ...
       if (pendingMessages.length > 0) {
-        currentWirePayload = pendingMessages[0]!;
+        currentWirePayload = pendingMessages.shift()!;
+        queuedMessages.push(...pendingMessages);
         return "continue";
       }
+
+      // Check queued messages before waiting
+      if (queuedMessages.length > 0) {
+        currentWirePayload = queuedMessages.shift()!;
+        return "continue";
+      }

The past review indicated this was addressed. Please verify the intended behavior — is dropping extra messages intentional (e.g., expecting only one message per turn)?


734-777: LGTM — Lifecycle hooks implementation.

The onChatStart and onTurnStart hooks are correctly sequenced and wrapped in traced spans. The scoped access token is minted per-turn and passed to callbacks for persistence.


691-713: LGTM — Message accumulation logic.

The accumulation correctly handles:

  • Turn 0: Full history from frontend initializes the accumulator
  • Regenerate: Full history resets the accumulator (removing last assistant message)
  • Submit (turn 1+): Only new messages appended to existing accumulator

This aligns with the frontend transport sending minimal payloads after the first turn.
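The three accumulation cases above can be sketched as a small pure function. The names (`accumulate`, `WirePayload`) are illustrative, not the actual SDK internals:

```typescript
// Sketch of turn-based message accumulation as described in the review.
type Message = { id: string; role: "user" | "assistant"; text: string };

type WirePayload =
  | { type: "submit"; messages: Message[] } // turn 0: full history; later turns: only new messages
  | { type: "regenerate"; messages: Message[] }; // full history minus the last assistant message

function accumulate(history: Message[], turn: number, payload: WirePayload): Message[] {
  if (turn === 0 || payload.type === "regenerate") {
    // Full history from the frontend replaces the accumulator
    return [...payload.messages];
  }
  // Submit on turn 1+: append only the new messages
  return [...history, ...payload.messages];
}
```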


1096-1115: LGTM — stripProviderMetadata helper.

This correctly strips ephemeral OpenAI itemId fields that would cause 404 errors when sent back in subsequent streamText calls. The implementation preserves other provider metadata while removing only the problematic fields.
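The stripping behavior can be illustrated with a minimal sketch. The `Part` shape and the helper name are assumptions; the point is removing only `itemId` while preserving other provider metadata:

```typescript
// Remove ephemeral provider fields (e.g. OpenAI itemId) from a message part
// while keeping the rest of providerMetadata intact.
type Part = { text: string; providerMetadata?: Record<string, Record<string, unknown>> };

function stripItemIds(part: Part): Part {
  if (!part.providerMetadata) return part;
  const cleaned: Record<string, Record<string, unknown>> = {};
  for (const [provider, meta] of Object.entries(part.providerMetadata)) {
    // Drop only itemId; everything else survives
    const { itemId: _itemId, ...rest } = meta;
    cleaned[provider] = rest;
  }
  return { ...part, providerMetadata: cleaned };
}
```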

docs/guides/ai-chat.mdx (2)

20-22: Verify the ai package version requirement.

The documentation states the ai package v5.0.0 or later is required. A past review indicated the reference implementation uses v6.0.0. Please verify and update the version requirement if needed.

#!/bin/bash
# Check ai package version in reference implementations and package.json files
rg -n '"ai"\s*:' --glob '**/package.json' | head -20

344-406: LGTM — Persistence documentation.

The persistence section clearly explains:

  • What needs to be persisted (messages + sessions)
  • Server-side persistence via onTurnStart (before streaming) and onTurnComplete (after)
  • The purpose of lastEventId for stream resumption
  • Complete code examples with database operations
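As a rough sketch of the persistence flow the guide describes: sessions are written in `onTurnStart` (before streaming, so a reload can reconnect) and messages plus `lastEventId` in `onTurnComplete`. Hook and field names follow this review's summary; the in-memory `db` is a placeholder for real database operations:

```typescript
// In-memory stand-ins for the session and message stores.
const db = {
  sessions: new Map<string, { runId: string; lastEventId?: string }>(),
  messages: new Map<string, unknown[]>(),
};

// Persist the session before streaming starts, so a page reload can
// look up the run and resubscribe to its stream.
async function onTurnStart(event: { chatId: string; runId: string }) {
  db.sessions.set(event.chatId, { runId: event.runId });
}

// Persist the full message history after the turn completes, along with
// the last stream event id for resumption.
async function onTurnComplete(event: {
  chatId: string;
  messages: unknown[];
  lastEventId?: string;
}) {
  db.messages.set(event.chatId, event.messages);
  const session = db.sessions.get(event.chatId);
  if (session) session.lastEventId = event.lastEventId;
}
```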
packages/core/src/v3/realtimeStreams/types.ts (1)

29-31: StreamWriteResult is a clean API extension.

This shape keeps backward compatibility while enabling resume metadata.

packages/core/src/v3/realtimeStreams/manager.ts (1)

19-21: Type propagation for active stream waits is consistent.

Good alignment with the new wait(): Promise<StreamWriteResult> contract.

packages/core/src/v3/realtimeStreams/noopManager.ts (1)

18-18: Noop manager now matches the wait contract.

Returning an object here correctly aligns with StreamWriteResult.

packages/core/src/v3/realtimeStreams/streamsWriterV1.ts (1)

261-264: wait() return type update is correctly wired.

This keeps V1 aligned with the shared writer contract.

packages/core/src/v3/realtimeStreams/streamInstance.ts (1)

66-69: StreamInstance.wait() now correctly preserves writer output.

Returning writer.wait() is the right propagation point for StreamWriteResult.

packages/core/src/v3/realtimeStreams/streamsWriterV2.ts (1)

173-176: Capturing lastSeqNum and exposing it from wait() is a solid improvement.

This makes V2 waits materially more useful for resume flows.

Also applies to: 188-191

packages/trigger-sdk/src/v3/streams.ts (1)

143-201: Span customization and waitpoint entity attribution are well integrated.

spanName/collapsed wiring and waitpointId attribution improve trace clarity.

Also applies to: 757-775, 822-823

Comment on lines +804 to +817
### ChatTaskOptions

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `id` | `string` | required | Task identifier |
| `run` | `(payload: ChatTaskRunPayload) => Promise<unknown>` | required | Handler for each turn |
| `onChatStart` | `(event: ChatStartEvent) => Promise<void> \| void` | — | Fires on turn 0 before `run()` |
| `onTurnStart` | `(event: TurnStartEvent) => Promise<void> \| void` | — | Fires every turn before `run()` |
| `onTurnComplete` | `(event: TurnCompleteEvent) => Promise<void> \| void` | — | Fires after each turn completes |
| `maxTurns` | `number` | `100` | Max conversational turns per run |
| `turnTimeout` | `string` | `"1h"` | How long to wait for next message |
| `warmTimeoutInSeconds` | `number` | `30` | Seconds to stay warm before suspending |

Plus all standard [TaskOptions](/tasks/overview) — `retry`, `queue`, `machine`, `maxDuration`, etc.

⚠️ Potential issue | 🟡 Minor

Add chatAccessTokenTTL to ChatTaskOptions table.

The ChatTaskOptions type in ai.ts (lines 542-552) includes chatAccessTokenTTL with default "1h", but it's missing from the documentation table.

 | `turnTimeout` | `string` | `"1h"` | How long to wait for next message |
 | `warmTimeoutInSeconds` | `number` | `30` | Seconds to stay warm before suspending |
+| `chatAccessTokenTTL` | `string` | `"1h"` | How long the scoped access token remains valid |
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

### ChatTaskOptions

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `id` | `string` | required | Task identifier |
| `run` | `(payload: ChatTaskRunPayload) => Promise<unknown>` | required | Handler for each turn |
| `onChatStart` | `(event: ChatStartEvent) => Promise<void> \| void` | — | Fires on turn 0 before `run()` |
| `onTurnStart` | `(event: TurnStartEvent) => Promise<void> \| void` | — | Fires every turn before `run()` |
| `onTurnComplete` | `(event: TurnCompleteEvent) => Promise<void> \| void` | — | Fires after each turn completes |
| `maxTurns` | `number` | `100` | Max conversational turns per run |
| `turnTimeout` | `string` | `"1h"` | How long to wait for next message |
| `warmTimeoutInSeconds` | `number` | `30` | Seconds to stay warm before suspending |
| `chatAccessTokenTTL` | `string` | `"1h"` | How long the scoped access token remains valid |

Plus all standard [TaskOptions](/tasks/overview) — `retry`, `queue`, `machine`, `maxDuration`, etc.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/ai-chat.mdx` around lines 804 - 817, The docs table for
ChatTaskOptions is missing the chatAccessTokenTTL entry; update the
ChatTaskOptions table in docs/guides/ai-chat.mdx to include a row for
`chatAccessTokenTTL` (property on the ChatTaskOptions type) with Type `string`,
Default `"1h"`, and a short Description like "TTL for generated chat access
tokens" so it matches the `chatAccessTokenTTL` field defined in the
ChatTaskOptions type in ai.ts.

…nd reference project enhancements

- Fix onFinish race condition: await onFinishPromise so capturedResponseMessage is set before accumulation
- Add chat.isStopped() helper accessible from anywhere during a turn
- Add chat.cleanupAbortedParts() to remove incomplete tool/reasoning/text parts on stop
- Auto-cleanup aborted parts before passing to onTurnComplete
- Clean incoming messages from frontend to prevent tool_use without tool_result API errors
- Add stopped and rawResponseMessage fields to TurnCompleteEvent
- Add continuation and previousRunId fields to all lifecycle hooks and run payload
- Add span attributes (chat.id, chat.turn, chat.stopped, chat.continuation, chat.previous_run_id, etc.)
- Add webFetch tool and reasoning model support to ai-chat reference project
- Render reasoning parts in frontend chat component
- Document all new fields in ai-chat guide
…e, per-chat model persistence, debug panel

- Export chat.stream (typed RealtimeDefinedStream<UIMessageChunk>) for writing custom data to the chat stream
- Add deepResearch subtask using data-* chunks to stream progress back to parent chat via target: root
- Use AI SDK data-research-progress chunk protocol with id-based updates for live progress
- Add ResearchProgress component and generic data-* fallback renderer in frontend
- Persist model per chat in DB (schema + onChatStart), model selector only on new chats
- Add collapsible debug panel showing run ID (with dashboard link), chat ID, model, status, session info
- Document chat.stream API, data-* chunks, and subtask streaming pattern in docs
…chatContext helpers

- Store chat turn context (chatId, turn, continuation, clientData) in locals for auto-detection
- toolFromTask now auto-detects chat context and passes it to subtask metadata
- Skip serializing messages array (can be large, rarely needed by subtasks)
- Tag subtask runs with toolCallId for dashboard visibility
- Add ai.toolCallId() convenience helper
- Add ai.chatContext<typeof myChat>() with typed clientData inference
- Add ai.chatContextOrThrow<typeof myChat>() that throws if not in a chat context
- Update deepResearch example to use ai.chatContextOrThrow
- Document all helpers in ai-chat guide
…timeouts

- Add onPreload hook and preloaded field to all lifecycle events
- Add transport.preload(chatId) for eagerly starting runs before first message
- Add preloadWarmTimeoutInSeconds and preloadTimeout task options
- Add preload:true run tag and chat.preloaded span attributes
- Add UserTool model for per-user dynamic tools loaded from DB
- Load dynamic tools in onPreload/onChatStart via chat.local
- Build dynamicTool() instances in run and spread into streamText tools
- Reference project: preload on new chat, dynamic company-info and user-preferences tools